Study Reveals Vulnerabilities in Large Language Models
A recent study finds that large language models remain vulnerable to jailbreak attacks; in response, the researchers propose AutoDefense, a defense framework designed to strengthen model security.